Compressing deep learning models is of fundamental importance for deploying them to edge devices. Incorporating hardware models and application constraints during compression maximizes the benefits but makes the result specific to one setting, so compression needs to be automated. Searching for the optimal parameters of a compression method can be treated as an optimization problem. This article introduces a Multi-Objective Hardware-Aware Quantization (MOHAQ) method, which considers both hardware efficiency and inference error as objectives for mixed-precision quantization. The method evaluates candidate solutions in a large search space in two steps. First, post-training quantization is applied for fast solution evaluation. Second, we propose a search technique called "beacon-based search" that retrains only selected solutions in the search space and uses them as beacons to estimate the effect of retraining on other solutions. To evaluate the optimization potential, we chose a speech recognition model trained on the TIMIT dataset. The model is based on the Simple Recurrent Unit (SRU) because of its considerable speedup over other recurrent units. We applied our method targeting two platforms: SiLago and Bitfusion. Experimental evaluation shows that the SRU can be compressed up to 8x by post-training quantization without any significant increase in error, with only a 1.5 percentage point increase. On SiLago, the inference-only search found solutions that achieve 80% and 64% of the maximum possible speedup and energy saving, respectively, with a 0.5 percentage point increase in error. On Bitfusion, under the constraint of a small SRAM size, beacon-based search reduced the error increase of the inference-only search by 4 percentage points and raised the achievable speedup to 47x compared to the Bitfusion baseline.
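The two-objective search described above can be illustrated with a minimal sketch (assumed, not the authors' code): each candidate assigns a bit-width per layer, a stand-in error/cost model scores it, and only the Pareto-optimal set is kept. The `error_proxy` and `cost_proxy` functions are hypothetical placeholders for post-training-quantization error and hardware cost.

```python
# Sketch of two-objective mixed-precision quantization search:
# enumerate per-layer bit-width assignments and keep the Pareto front
# over (error, cost). The proxies below are illustrative stand-ins.
import itertools

LAYERS = 3
BITWIDTHS = (2, 4, 8)  # candidate precisions per layer

def error_proxy(cfg):
    # Hypothetical stand-in for quantization error: fewer bits, more error.
    return sum(1.0 / b for b in cfg)

def cost_proxy(cfg):
    # Hypothetical stand-in for hardware cost (energy/latency): total bits.
    return sum(cfg)

def pareto_front(candidates):
    scored = [(error_proxy(c), cost_proxy(c), c) for c in candidates]
    front = []
    for e, k, c in scored:
        dominated = any(e2 <= e and k2 <= k and (e2, k2) != (e, k)
                        for e2, k2, _ in scored)
        if not dominated:
            front.append(c)
    return front

candidates = list(itertools.product(BITWIDTHS, repeat=LAYERS))
front = pareto_front(candidates)
```

A real MOHAQ-style search would replace the proxies with measured inference error and a platform model (e.g. for SiLago or Bitfusion), but the Pareto-selection structure is the same.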
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
Our goal with this survey is to provide an overview of the state-of-the-art deep learning technologies for face generation and editing. We cover the latest popular architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
Transformer models have achieved superior performance in various natural language processing tasks. However, the quadratic computational cost of the attention mechanism limits their practicality for long sequences. Existing attention variants improve computational efficiency, but they have limited ability to effectively compute global information. In parallel to Transformer models, state space models (SSMs) are tailored for long sequences, but they are not flexible enough to capture complicated local information. We propose SPADE, short for $\underline{\textbf{S}}$tate s$\underline{\textbf{P}}$ace $\underline{\textbf{A}}$ugmente$\underline{\textbf{D}}$ Transform$\underline{\textbf{E}}$r. Specifically, we augment the bottom layer of SPADE with an SSM, and we employ efficient local attention methods for the other layers. The SSM supplies global information, complementing the lack of long-range dependency modeling in local attention methods. Experimental results on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method. To further demonstrate the scalability of SPADE, we pre-train large encoder-decoder models and present fine-tuning results on natural language understanding and natural language generation tasks.
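The division of labor described above can be sketched in a toy numpy example (an assumed illustration, not the SPADE implementation): a diagonal linear SSM recurrence carries long-range information through the sequence, and a sliding-window attention handles local structure on top of it.

```python
# Toy sketch of combining a state space recurrence (global mixing)
# with windowed local attention, in the spirit of SSM-augmented
# Transformers. Shapes: x is (seq_len, d_model).
import numpy as np

def ssm_layer(x, decay=0.9):
    """Diagonal linear SSM recurrence: h_t = decay * h_{t-1} + x_t."""
    h = np.zeros_like(x[0])
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        h = decay * h + xt
        out[t] = h
    return out

def local_attention(x, window=2):
    """Each position attends only to itself and `window` previous positions."""
    T, d = x.shape
    out = np.empty_like(x)
    for t in range(T):
        lo = max(0, t - window)
        keys = x[lo:t + 1]                 # (w, d) local context
        scores = keys @ x[t] / np.sqrt(d)  # (w,) dot-product scores
        w = np.exp(scores - scores.max())
        w /= w.sum()                       # softmax over the window
        out[t] = w @ keys
    return out

def spade_block(x):
    # Bottom: SSM provides global information; top: efficient local attention.
    return local_attention(ssm_layer(x))

x = np.random.default_rng(0).normal(size=(6, 4))
y = spade_block(x)
```

In the actual model the SSM sits only in the bottom layer and the remaining layers use efficient local attention; the toy above collapses that stack into a single block for readability.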
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. The framework exploits stochastic trust observations between robots to arrive at tractable, resilient decisions at a centralized Fusion Center (FC), even when i) there exist malicious robots in the network whose number may be larger than the number of legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two Stage Approach (2SA), which estimates the legitimacy of robots based on the received trust observations and provably minimizes the probability of detection error under the worst-case malicious attack. Here, the proportion of malicious robots is known but arbitrary. For an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT), which uses both the reported robot measurements and the trust observations to estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis simultaneously. We exploit special problem structure to show that this approach remains computationally tractable despite several unknown problem parameters. We deploy both algorithms in a hardware experiment in which a group of robots crowdsenses traffic conditions on a mock-up road network while subject to a Sybil attack. We extract the trust observations for each robot from actual communication signals, which provide statistical information on the uniqueness of the sender. We show that even when malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT, respectively.
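A minimal sketch (an assumed simplification, not the paper's 2SA) conveys the two-stage idea: stage one classifies robots as legitimate by thresholding their average trust observations, and stage two fuses only the trusted robots' one-shot binary measurements by majority vote. The threshold and the example values are illustrative.

```python
# Two-stage fusion sketch: trust-based screening, then majority vote.
import statistics

def two_stage_decision(measurements, trust_obs, threshold=0.5):
    # Stage 1: keep robots whose mean trust observation exceeds the threshold.
    trusted = [i for i, obs in enumerate(trust_obs)
               if statistics.mean(obs) > threshold]
    # Stage 2: majority vote over the trusted robots' binary reports.
    votes = [measurements[i] for i in trusted]
    return int(sum(votes) * 2 > len(votes))

# Two legitimate robots (high trust) report 1; a malicious majority reports 0.
measurements = [1, 1, 0, 0, 0]
trust_obs = [[0.9, 0.8], [0.7, 0.9], [0.2, 0.3], [0.1, 0.2], [0.3, 0.1]]
decision = two_stage_decision(measurements, trust_obs)
```

Note how the trust screening lets the FC reach the legitimate robots' decision even though malicious robots outnumber them, which is the qualitative behavior the paper's algorithms guarantee under much weaker assumptions.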
The growing operational capability of global Earth Observation (EO) creates new opportunities for data-driven approaches to understand and protect our planet. However, the current use of EO archives is severely restricted due to the huge archive sizes and the limited exploration capabilities provided by EO platforms. To address this limitation, we have recently proposed MiLaN, a content-based image retrieval approach for fast similarity search in satellite image archives. MiLaN is a metric-learning-based deep hashing network that encodes high-dimensional image features into compact binary hash codes. We use these codes as keys in a hash table to enable real-time nearest-neighbor search with highly accurate retrieval. In this demonstration, we showcase the efficiency of MiLaN by integrating it with EarthQube, a browser and search engine within AgoraEO. EarthQube supports interactive visual exploration and typical queries over satellite image repositories. Demo visitors will interact with EarthQube, playing the roles of different users who search for images by their semantic content and apply additional filters.
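The hash-table retrieval idea can be sketched as follows (an assumed illustration, not MiLaN itself, which learns its hash function with metric learning): random-hyperplane hashing maps feature vectors to short binary codes, and the codes serve as hash-table keys for constant-time candidate lookup.

```python
# Sketch of binary hashing for fast similarity search: features are
# signed against random hyperplanes to produce 16-bit codes, which act
# as keys in a Python dict ("hash table").
import numpy as np

rng = np.random.default_rng(42)
planes = rng.normal(size=(16, 64))  # 16-bit codes over 64-d features

def hash_code(feat):
    bits = (planes @ feat > 0).astype(int)   # one bit per hyperplane
    return int("".join(map(str, bits)), 2)   # pack bits into an int key

# Build the hash table over a toy "archive" of feature vectors.
archive = rng.normal(size=(100, 64))
table = {}
for idx, feat in enumerate(archive):
    table.setdefault(hash_code(feat), []).append(idx)

def search(query):
    """Return archive indices whose code collides with the query's code."""
    return table.get(hash_code(query), [])

hits = search(archive[7])  # an archived image must at least retrieve itself
```

A deployed system would additionally probe nearby codes (small Hamming radius) and rerank candidates by full feature distance; the learned hashing in MiLaN makes semantically similar images far more likely to collide than random hyperplanes do.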
Neural language models are widely used; however, their model parameters often need to be adapted to the specific domains and tasks of an application, which is time- and resource-consuming. Adapters have therefore recently been introduced as a lightweight alternative for model adaptation. They consist of a small set of task-specific parameters, with reduced training time and simple parameter composition. The simplicity of adapter training and composition comes with new challenges, such as maintaining an overview of adapter properties and effectively comparing the embedding spaces they produce. To help developers overcome these challenges, we provide a twofold contribution. First, in close collaboration with NLP researchers, we conducted a requirement analysis for an approach supporting adapter evaluation and detected the need for both intrinsic (i.e., embedding-similarity-based) and extrinsic (i.e., prediction-based) explanation methods. Second, motivated by the gathered requirements, we designed a flexible visual analytics workspace that enables the comparison of adapter properties. In this paper, we discuss several design iterations and alternatives for interactive, comparative visual explanation methods. Our comparative visualizations show the differences in the adapted embedding vectors and prediction outcomes for diverse human-interpretable concepts (e.g., person names, human qualities). We evaluate our workspace through case studies and show, for instance, that an adapter trained on the language debiasing task according to context-0 (decontextualized) embeddings introduces a new type of bias in which words (even gender-independent ones) that are close to female pronouns become more similar to female words. We demonstrate that these are artifacts of the context-0 embeddings.
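For readers unfamiliar with the adapter mechanism evaluated above, a minimal numpy sketch (assumed, not the paper's system) shows the standard bottleneck form: a small down-projection/up-projection pair with a residual connection, inserted into a frozen model, so only these few parameters are trained per task.

```python
# Bottleneck adapter sketch: h + ReLU(h @ W_down) @ W_up.
# With W_up zero-initialized, the adapter starts as the identity,
# a common choice so training begins from the frozen model's behavior.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_bottleneck = 8, 2

W_down = rng.normal(scale=0.1, size=(d_model, d_bottleneck))
W_up = np.zeros((d_bottleneck, d_model))  # zero init: identity at start

def adapter(h):
    z = np.maximum(h @ W_down, 0.0)  # down-project + ReLU
    return h + z @ W_up              # up-project + residual connection

h = rng.normal(size=(3, d_model))   # a batch of frozen-model activations
out = adapter(h)
```

The small `d_bottleneck` is what makes adapters cheap to train and easy to compose, and also why tooling that compares many adapters' embedding spaces, as in the workspace above, is practical.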
In autonomous and mobile robotics, one of the main challenges is robust on-the-fly perception of the environment, which is often unknown and dynamic, as in autonomous drone racing. In this work, we propose a novel deep-neural-network-based perception method for racing gate detection, PencilNet, which relies on a lightweight neural network backbone on top of a pencil filter. This approach unifies the prediction of the gates' 2D position, distance, and orientation. We show that our method is effective for zero-shot sim-to-real transfer learning, requiring no real-world training samples. Moreover, it is remarkably robust to the illumination changes commonly seen during fast flight, compared to state-of-the-art methods. A thorough set of experiments demonstrates the effectiveness of this approach in multiple challenging scenarios, where the drone completes various tracks under different lighting conditions.
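The intuition behind an edge-sketch preprocessing stage can be illustrated with a hedged toy example (this is an assumed gradient-magnitude filter, not the actual pencil filter): mapping the image to normalized edge strength discards absolute brightness, so a lightweight backbone sees similar inputs across lighting conditions.

```python
# Toy illumination-robust preprocessing: a normalized gradient-magnitude
# "sketch" of a grayscale image. Scaling the image brightness leaves the
# normalized sketch unchanged.
import numpy as np

def sketch_filter(gray):
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]  # central horizontal gradient
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]  # central vertical gradient
    mag = np.hypot(gx, gy)                    # edge strength
    m = mag.max()
    return mag / m if m > 0 else mag          # normalize to [0, 1]

rng = np.random.default_rng(3)
img = rng.random((16, 16))
sketch = sketch_filter(img)
sketch_dim = sketch_filter(0.5 * img)  # same scene under dimmer lighting
```

Because the gradients and their maximum scale together, the normalized sketch is invariant to global brightness scaling, which is one simple way a filter-based front end can buy the robustness to illumination changes reported above.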
For several years, deep learning methods have been successfully applied to remote sensing problems. Among these methods, CNN-based models achieve high accuracy in solving land classification problems using satellite or aerial images. Although these models reach high accuracy, they typically have large memory requirements. On the other hand, small models with a low memory footprint are desirable for applications such as those deployed on unmanned aerial vehicles. Unfortunately, small CNN models do not provide accuracy as high as their large counterparts. In this study, we propose a novel method to improve the accuracy of CNN models, especially small ones, by injecting traditional features into them. To test the effectiveness of the proposed method, we applied it to the CNN models SqueezeNet, MobileNetV2, ShuffleNetV2, VGG16, and ResNet50V2, whose sizes range from 0.5 MB to 528 MB. We used sample means, gray level co-occurrence matrix features, Hu moments, local binary patterns, histograms of oriented gradients, and color invariants as the injected traditional features. We tested the proposed method on the EuroSAT dataset for land classification. Our experimental results show that the proposed method significantly improves land classification accuracy, especially when applied to small CNN models.
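The injection step can be sketched as follows (a hypothetical illustration, not the study's code): handcrafted descriptors computed directly from the image are concatenated with the CNN's learned feature vector before the final classifier. The descriptors below (per-channel means, a coarse intensity histogram) are simplified stand-ins for the ones listed above.

```python
# Feature-injection sketch: concatenate handcrafted descriptors with a
# CNN embedding so a small classifier head can use both.
import numpy as np

def handcrafted_features(img):
    # Stand-in descriptors: per-channel means (3) + 8-bin intensity histogram.
    means = img.mean(axis=(0, 1))                        # (3,)
    hist, _ = np.histogram(img.mean(axis=2), bins=8,
                           range=(0.0, 1.0), density=True)
    return np.concatenate([means, hist])                 # (11,)

def inject(cnn_features, img):
    """Fuse learned and handcrafted features into one vector."""
    return np.concatenate([cnn_features, handcrafted_features(img)])

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))        # toy RGB patch with values in [0, 1)
cnn_features = rng.normal(size=128)  # stand-in for a small CNN's embedding
fused = inject(cnn_features, img)    # (139,) vector fed to the classifier
```

The appeal of this design is that the handcrafted branch adds essentially no parameters, so a memory-constrained model gains discriminative texture and color cues without growing.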